
Implement lemonade provider for Codex#1505

Open
sawansri wants to merge 11 commits into main from sawansri/codex-fix

Conversation

Collaborator

@sawansri sawansri commented Apr 1, 2026

Resolves #1504

Also introduces --agent-args, which lets users pass arguments through to their agents, similar to how --llamacpp-args used to work. This enables features such as session resumption in both Claude Code and Codex.

sawansri added 9 commits April 1, 2026 11:27
Signed-off-by: Sawan Srivastava <sawan1210@gmail.com>
…ndle this)

@sawansri sawansri marked this pull request as ready for review April 1, 2026 19:00
@sawansri sawansri requested a review from jeremyfowers April 1, 2026 19:00
@@ -451,6 +451,8 @@
| `--model MODEL_NAME` | Model name to launch with. If omitted, you will be prompted to select one. | No |
Member



server log

2026-04-01 15:16:07.111 [Info] (Router) Model loaded successfully. Total loaded: 1
2026-04-01 15:16:07.111 [Info] (Server) Model loaded successfully: user.Qwen3.5-35B-A3B-NoThinking
2026-04-01 15:16:07.111 [Info] (Server) POST /api/v1/responses - Streaming
2026-04-01 15:16:07.119 [Error] (Process) srv    operator(): got exception: {"error":{"code":500,"message":"\n------------\nWhile executing CallExpression at line 85, column 32 in source:\n...first %}↵            {{- raise_exception('System message must be at the beginnin...\n                                           ^\nError: Jinja Exception: System message must be at the beginning.","type":"server_error"}}
2026-04-01 15:16:07.119 [Info] (Process) srv  log_server_r: done request: POST /v1/responses 127.0.0.1 500
2026-04-01 15:16:07.119 [Error] (StreamingProxy) Backend returned error: 500
(the same request is retried and fails with the identical 500 error five more times, through 15:16:13)

Codex isn't working for me, any tips?

Collaborator Author

@sawansri sawansri Apr 1, 2026


Appears to be related to: ggml-org/llama.cpp#20733. There's a draft PR open to address this: ggml-org/llama.cpp#21174.

Looks like an upstream issue with the Qwen 3.5 model family. I've been testing with GLM 4.7 Flash and Nemotron 3 Nano, which probably explains why I haven't hit it.
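For context, the template's `raise_exception('System message must be at the beginning')` fires when a system turn appears anywhere but index 0. Until the upstream fix lands, one possible client-side workaround (purely a hypothetical sketch; the real fix belongs in llama.cpp) is to hoist system messages to the front before sending the request:

```python
def ensure_system_first(messages: list[dict]) -> list[dict]:
    # Move all system messages to the front of the conversation so the
    # Qwen 3.5 chat template's "System message must be at the beginning"
    # check passes. Relative order within each group is preserved.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest
```

Whether reordering is semantically safe depends on how the agent interleaves system turns, so this is a stopgap at best.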

Member

@jeremyfowers jeremyfowers Apr 2, 2026


Thanks! I'll try again with GLM. Should Qwen3.5 not be a recommended recipe for Codex then?

Member


Works for me with GLM! So yeah the recommended recipe list just needs to be adjusted.

Collaborator Author


> Works for me with GLM! So yeah the recommended recipe list just needs to be adjusted.

Interestingly, Qwen 3 Coder Next works fine for me; the issue seems limited to the Qwen 3.5 family. I'll remove those models from the recommended list.

For those that have already downloaded Qwen 3.5 models, do you think we should add a warning for them as well?

Collaborator Author


(screenshot of the warning)

Added warnings for Codex users.

```
lemonade launch AGENT [--model MODEL_NAME] [options]
```

| Option/Argument | Description | Required |
Member


> This enables features such as session resumption in both Claude Code and Codex.

Can you provide instructions for how to do this? If I use the regular resume command in Codex it resumes, but with ChatGPT as the model.

Collaborator Author


To resume a previous session, the command looks like this: `lemonade launch codex --agent-args "resume SESSION_ID"`. This automatically picks up the Lemonade provider as the default and should not route you to the OpenAI provider.

I will add an entry in the docs for this.

sawansri added 2 commits April 2, 2026 08:43
Signed-off-by: Sawan Srivastava <sawan1210@gmail.com>
…odels



Development

Successfully merging this pull request may close these issues.

Codex expects a Lemonade instance running at port 11434
